Sensitivity Analysis of Transversal RLS Algorithms With Correlated Inputs
Authors
Abstract
In this contribution it is shown that the sensitivity analysis predicts the same mean square excess error for both correlated and white inputs; correlation increases the variance about the mean square excess error. We also present a stable finite-precision RLS algorithm.

ISCAS '89, p. 1744. CH2692-2/89/0000-1744 $1.00 © 1989 IEEE

1. Introduction

Many adaptive filtering problems can be cast as a system identification problem. Consider the desired signal d(n), which is a linear mapping of the sequence x(n) by the weights w*:

    d(n) = \sum_{i=0}^{N-1} w_i^* \, x(n-i) = \mathbf{x}'(n)\,\mathbf{w}^*        (1.2)

In the Recursive Least Squares (RLS) algorithm, the weights w(n) are calculated such that the accumulated sum of the squared error residuals is minimized. The error residual is the difference between the desired response and the estimate of the desired response obtained from w(n). The conventional RLS algorithm can then be stated as follows [4]:

    w(n) = w(n-1) + k(n)\,e(n)        (1.7)

with initialization w(0) = 0 and P(0) = \delta^{-1} I (\delta \ll 1). Here \lambda is the exponential forgetting factor; it is equal to 1 for the prewindowed growing-memory case, where all data are weighted equally.

2. Sensitivity Analysis of the Transversal RLS Algorithm

In a recent paper [3] it was shown that one way to analyze finite-precision effects is indirect: through the study of the sensitivity of the RLS algorithm to perturbations in the filter coefficients. That analysis was based on expanding the deviation from the optimum error power, which is minimized by the RLS algorithm, due to random perturbations in the filter coefficients, in a Taylor series involving second-order partial derivatives. This is necessary since the first-order derivatives are zero, owing to the optimum RLS filter formulation. Defining \xi(n) as the optimum error power that is minimized in the RLS algorithm, we have (2.1). The mean value of the error-power deviation due to random perturbations \delta_i(n) in the weights w(n), for \lambda = 1,
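The conventional RLS recursion (1.7) with the stated initialization can be sketched as follows. This is an illustration, not the paper's own code; the function name `rls_identify` and the parameter values in the usage example are assumptions for demonstration.

```python
import numpy as np

def rls_identify(x, d, N, lam=1.0, delta=1e-3):
    """Conventional exponentially weighted RLS, eq. (1.7):
    w(n) = w(n-1) + k(n) e(n), with w(0) = 0, P(0) = (1/delta) I."""
    w = np.zeros(N)
    P = np.eye(N) / delta              # P(0) = delta^{-1} I, delta << 1
    xbuf = np.zeros(N)                 # regressor x(n) = [x(n), ..., x(n-N+1)]'
    for n in range(len(x)):
        xbuf = np.concatenate(([x[n]], xbuf[:-1]))
        e = d[n] - xbuf @ w            # a priori error residual
        Px = P @ xbuf
        k = Px / (lam + xbuf @ Px)     # gain vector k(n)
        w = w + k * e                  # coefficient update (1.7)
        P = (P - np.outer(k, Px)) / lam
    return w

# Usage: identify a hypothetical 4-tap system from noiseless data,
# d(n) = sum_i w*_i x(n-i) as in (1.2), prewindowed (x = 0 for n < 0).
rng = np.random.default_rng(0)
w_star = np.array([0.5, -0.3, 0.2, 0.1])
x = rng.standard_normal(400)
d = np.convolve(x, w_star)[:400]
w_hat = rls_identify(x, d, N=4)
```

With lam = 1 (the growing-memory case discussed above) and no observation noise, w_hat converges to w* up to a small bias of order delta introduced by the P(0) initialization.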
was derived as [3]

    \overline{\Delta\xi}(n) = n\,N\,\epsilon\,\sigma_x^2 ,

where \epsilon is the variance of the random perturbations \delta_i(n) and \sigma_x^2 is the variance of the input samples; N is the order of the filter and n is the number of iterations. It should be noted that the mean of the deviation is the same for equal-power white (uncorrelated) and correlated signals. This is also verified in simulations of a fixed-point implementation (shown below). However, the variance of the deviation increases for correlated signals. For white Gaussian input samples the variance is [3]

    \sigma_{\Delta\xi}^2(n) = n^2\,\epsilon^2\,N^2\,\sigma_x^4 ,
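The equal-mean, larger-variance prediction can be checked with a small Monte Carlo sketch. This is an illustration under assumed conditions (Gaussian perturbations of variance eps, an AR(1) model for the correlated input, N = 8), not the paper's fixed-point simulation: the per-iteration excess error power (x'(n) δ)² has the same mean for white and equal-power correlated regressors, but a larger variance when the input is correlated.

```python
import numpy as np

rng = np.random.default_rng(1)
N, eps, trials = 8, 1e-4, 100_000   # filter order, perturbation variance, trials

def excess_error(x_rows):
    # Per-iteration excess error power (x'(n) delta)^2 for zero-mean
    # Gaussian perturbations delta with variance eps per coefficient.
    delta = rng.normal(0.0, np.sqrt(eps), x_rows.shape)
    v = np.einsum('ij,ij->i', x_rows, delta)
    return v * v

# White regressors with unit sample power (sigma_x^2 = 1).
x_white = rng.standard_normal((trials, N))

# AR(1)-correlated regressors, normalized to the same unit power.
a = 0.9
z = rng.standard_normal((trials, N + 50))   # 50-sample burn-in
for i in range(1, z.shape[1]):
    z[:, i] += a * z[:, i - 1]
x_corr = z[:, -N:] * np.sqrt(1.0 - a * a)   # stationary variance -> 1

ew = excess_error(x_white)
ec = excess_error(x_corr)
# ew.mean() and ec.mean() both approach eps * N * sigma_x^2,
# while ec.var() exceeds ew.var().
```

The mean depends only on the trace of the input autocorrelation matrix, which is fixed by the input power, whereas the variance also involves the off-diagonal correlation terms; this is why correlation inflates the spread but not the mean.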
Similar articles
On the sensitivity of transversal RLS algorithms to random perturbations in the filter coefficients
Transversal Recursive Least Squares (RLS) algorithms estimate filter coefficients which minimize the accumulated sum of the squared error residuals, termed the error power. In this paper the sensitivity of this error power to random perturbations about the optimum filter coefficients is investigated. Expressions are derived for the mean and variance of the deviation from the optimum error ...
Fast Transversal Recursive Least-Squares (FT-RLS) Algorithm
In this paper a brief overview of the Fast Transversal Recursive Least-Squares (FT-RLS) algorithm is provided. This algorithm is designed to provide similar performance to the standard RLS algorithm while reducing the computation order. This is accomplished by a combination of four transversal filters used in unison. Finite precision effects are also briefly discussed. Simulations are performed...
Algorithms and Architectures for Split Recursive Least Squares
In this paper, a new computationally efficient algorithm for recursive least-squares (RLS) filtering is presented. The proposed Split RLS algorithm can perform the approximated RLS with O(N) complexity for signals having no special data structure to be exploited. Our performance analysis shows that the estimation bias will be small when the input data are less correlated. We also show that ...
QRD-RLS Adaptive Filtering
The main limitation of FQRD-RLS algorithms is that they lack an explicit weight vector term. Furthermore, they do not directly provide the variables allowing for a straightforward computation of the weight vector as is the case with the conventional QRD-RLS algorithm, where a back-substitution procedure can be used to compute the coefficients. Therefore, the applications of the FQRD-RLS algorit...
Hardware Implementation of FxT Recursive Least Mean Square Algorithm for Effective Noise Cancellation Using TMS320C5x Processor
This paper proposes a model for the Recursive Least Square (RLS) algorithm and the Fast Transversal Recursive Least Square (FxT-RLS) algorithm for effective noise cancellation in acoustics and speech signal processing. The designed model yields a noise-free signal as output for the RLS and FxT-RLS algorithms. The filter used here is an adaptive filter and the algorithm used is Recursive Least S...